11 research outputs found

    Adaptive Q-learning-supported Resource Allocation Model in Vehicular Fogs

    Urban computing has become a significant driver in supporting the delivery and sharing of services, and a strong ally to intelligent transportation. Smart vehicles possess computing and communication capabilities that enable many autonomous vehicular safety and infotainment applications. Vehicular Cloud Computing (VCC) has already proven to be a paradigm-shifting technology, harnessing the computation resources of vehicles' on-board units to form clustered computing units that solve real-world computing problems. However, with the rise in vehicular application use and intermittent network conditions, VCC exhibits many drawbacks. Vehicular Fog computing has emerged as a new paradigm for enabling and facilitating efficient service and resource sharing in urban environments. Several vehicular resource management works have attempted to deal with the highly dynamic vehicular environment using diverse approaches, e.g. MDP, SMDP, and policy-based greedy techniques. However, high vehicular mobility poses several challenges that compromise consistency, efficiency, and quality of service. RL-enabled adaptive vehicular Fogs can cope with this mobility by properly distributing load and resources over Fogs. Thus, we propose a mobility-based Cloudlet Dwell Time (CDT) estimation method for accurately estimating vehicular resources in a Fog. Leveraging the CDT estimation model, we devise an adaptive and highly dynamic resource allocation model that uses a mathematical formulation for Fog selection and reinforcement learning as an iterative review and feedback mechanism for generating an optimal resource allocation policy.
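    As a rough illustration of the reinforcement-learning feedback loop described above, the sketch below shows a generic tabular Q-learning update with epsilon-greedy fog selection. The states, actions, rewards, and hyperparameters are hypothetical placeholders, not the paper's actual MDP formulation.

```python
import random

# Minimal tabular Q-learning sketch for fog selection.
# States, actions, and rewards are hypothetical placeholders.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def update_q(q, state, action, reward, next_state, actions):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose_action(q, state, actions, rng):
    """Epsilon-greedy fog selection."""
    if rng.random() < EPSILON:
        return rng.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

# Toy episode: two candidate fogs, reward favours fog "B".
rng = random.Random(0)
q = {}
fogs = ["A", "B"]
for _ in range(500):
    a = choose_action(q, "s0", fogs, rng)
    reward = 1.0 if a == "B" else 0.0
    update_q(q, "s0", a, reward, "s0", fogs)
```

    After enough feedback iterations the learned Q-values prefer the fog with the better observed reward, which is the essence of the iterative review-and-feedback policy generation described above.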

    Towards robust convolutional neural networks in challenging environments

    Image classification is one of the fundamental tasks in the field of computer vision. Although Artificial Neural Networks (ANNs) showed a lot of promise in this field, the lack of efficient computer hardware subdued their potential to a great extent. In the early 2000s, advances in hardware coupled with better network design saw the dramatic rise of the Convolutional Neural Network (CNN). Deep CNNs pushed the State-of-The-Art (SOTA) in a number of vision tasks, including image classification, object detection, and segmentation. Presently, CNNs dominate these tasks. Although CNNs exhibit impressive classification performance on clean images, they are vulnerable to distortions such as noise and blur. Fine-tuning a pre-trained CNN on mutually exclusive or a union set of distortions is a brute-force solution. This iterative fine-tuning process with all known types of distortion is, however, exhaustive, and the network struggles to handle unseen distortions. CNNs are also vulnerable to image translation or shift, partly due to common Down-Sampling (DS) layers, e.g., max-pooling and strided convolution. These operations violate the Nyquist sampling rate and cause aliasing. The textbook solution is low-pass filtering (blurring) before down-sampling, which can benefit deep networks as well. Even so, non-linearity units such as ReLU often re-introduce the problem, suggesting that blurring alone may not suffice. Another important but under-explored issue for CNNs is unknown or Open Set Recognition (OSR). CNNs are commonly designed for closed-set arrangements, where test instances only belong to some ‘Known Known’ (KK) classes used in training. As such, they predict a class label for a test sample based on the distribution of the KK classes. However, when used under the OSR setup (where an input may belong to an ‘Unknown Unknown’ or UU class), such a network will always classify a test instance as one of the KK classes even if it is from a UU class.
    Historically, CNNs have struggled to detect objects in images with large differences in scale, especially small objects. This is because the DS layers inside a CNN often progressively wipe out the signal from small objects. As a result, the final layers are left with no signature from these objects, leading to degraded performance. In this work, we propose solutions to the above four problems. First, we improve CNN robustness against distortion by proposing DCT-based augmentation, adaptive regularisation, and noise-suppressing Activation Functions (AF). Second, to ensure further performance gain and robustness to image transformations, we introduce anti-aliasing properties inside the AF and propose a novel DS method called blurpool. Third, to address the OSR problem, we propose a novel training paradigm that ensures detection of UU classes and accurate classification of the KK classes. Finally, we introduce a novel CNN that enables a deep detector to identify small objects with high precision and recall. We evaluate our methods on a number of benchmark datasets and demonstrate that they outperform contemporary methods in the respective problem set-ups.
    Doctor of Philosophy thesis
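    The anti-aliasing argument above (blur before down-sampling) can be illustrated with a toy 1-D signal: naive strided subsampling changes drastically under a one-sample shift, while low-pass filtering first makes the result far more stable. This is a generic sketch of the principle, not the thesis's actual blurpool layer; the [1, 2, 1]/4 kernel and the signals are illustrative.

```python
# Toy 1-D illustration of anti-aliased down-sampling ("blur then subsample").

def subsample(x, stride=2):
    """Naive strided down-sampling: keep every `stride`-th sample."""
    return x[::stride]

def blur_then_subsample(x, stride=2):
    """Low-pass filter with a [1, 2, 1]/4 kernel (edge-replicated), then subsample."""
    padded = [x[0]] + list(x) + [x[-1]]
    blurred = [(padded[i] + 2 * padded[i + 1] + padded[i + 2]) / 4
               for i in range(len(x))]
    return blurred[::stride]

# A one-sample shift of the input flips naive subsampling completely,
# while the blurred version barely changes.
signal = [0, 1, 0, 1, 0, 1, 0, 1]
shifted = [1, 0, 1, 0, 1, 0, 1, 0]

naive_a, naive_b = subsample(signal), subsample(shifted)
aa_a, aa_b = blur_then_subsample(signal), blur_then_subsample(shifted)
```

    Here `naive_a` and `naive_b` disagree at every position, whereas the blurred outputs are nearly identical, mirroring the Nyquist argument in the abstract.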

    Predicting Temperature of Major Cities Using Machine Learning and Deep Learning

    Currently, the issue that most concerns world leaders is climate change, owing to its effects on agriculture, the environment, and the economies of daily life. Combating it therefore requires temperature prediction with strong accuracy. So far, the most effective and widely used measure for such forecasting is Numerical Weather Prediction (NWP), a mathematical model that needs broad data from different applications to make predictions. This expensive, time- and labor-consuming work can be minimized by making such predictions with machine learning algorithms. Using a database compiled by the University of Dayton that records temperature changes in major cities, we applied time series analysis, using LSTM to turn existing data into a tool for future prediction. LSTM takes long-term data, as well as any short-term exceptions or anomalies that may have occurred, and calculates the trend, seasonality, and stationarity of the data. By using models such as ARIMA, SARIMA, and Prophet together with the concepts of RNN and LSTM, we can filter out abnormalities, preprocess the data, compare it with previous trends, and make a prediction of future trends. Seasonality and stationarity also help us analyze recurrence over one-year periods and remove the time dependence of the data, so that the predicted general changes can be seen. By doing so we managed to predict the temperature of different cities at any time in the future based on available data, and built a method of accurate prediction. This document contains our methodology for making such predictions.
    Comment: 15 pages, 31 figures
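    A minimal sketch of how a temperature series is framed for sequence models such as LSTM: each window of past readings becomes one training input and the next reading its target. The function and the toy temperatures below are illustrative assumptions, not the paper's code or the University of Dayton data.

```python
# Sliding-window framing of a temperature series for sequence models.

def make_windows(series, lookback):
    """Turn a 1-D series into (input_window, next_value) training pairs."""
    pairs = []
    for i in range(len(series) - lookback):
        pairs.append((series[i:i + lookback], series[i + lookback]))
    return pairs

# Toy monthly temperatures for one city (degrees Celsius, invented values).
temps = [12.1, 14.3, 18.9, 23.4, 27.0, 29.8, 30.5, 29.1, 25.2, 20.0, 15.4, 12.8]
pairs = make_windows(temps, lookback=3)
```

    Each pair feeds the model three past months and asks it to predict the fourth, which is the supervised form an LSTM (or ARIMA-style lag model) trains on.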

    Distortion Robust Image Classification using Deep Convolutional Neural Network with Discrete Cosine Transform

    Convolutional Neural Networks (CNNs) are good at image classification. However, they are found to be vulnerable to image quality degradation. Even a small amount of distortion, such as noise or blur, can severely hamper the performance of these CNN architectures. Most of the work in the literature strives to mitigate this problem simply by fine-tuning a pre-trained CNN on mutually exclusive or a union set of distorted training data. This iterative fine-tuning process with all known types of distortion is exhaustive, and the network struggles to handle unseen distortions. In this work, we propose the distortion-robust DCT-Net, a Discrete Cosine Transform based module integrated into a deep network built on top of VGG16. Unlike other works in the literature, DCT-Net is "blind" to the distortion type and level in an image during both training and testing. As part of the training process, the proposed DCT module discards input information that mostly represents the contribution of high frequencies. DCT-Net is trained "blindly" only once and applied in generic situations without further retraining. We also extend the idea of traditional dropout and present a training-adaptive version of it. We evaluate our proposed method against Gaussian blur, motion blur, salt-and-pepper noise, Gaussian noise, and speckle noise added to the CIFAR-10/100 and ImageNet test sets. Experimental results demonstrate that, once trained, DCT-Net not only generalizes well to a variety of unseen image distortions but also outperforms other methods in the literature.
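    The frequency-truncation step described above can be sketched in 1-D: transform with the DCT, zero out the highest-frequency coefficients, and invert. The real DCT-Net operates on 2-D image data inside a deep network; this pure-Python toy only shows the "discard high frequencies" idea.

```python
import math

# 1-D orthonormal DCT-II / DCT-III pair and a high-frequency suppression step.

def dct(x):
    """Orthonormal DCT-II."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct(c):
    """Inverse of the orthonormal DCT-II (i.e. DCT-III)."""
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] / math.sqrt(n)
        s += sum(math.sqrt(2 / n) * c[k] * math.cos(math.pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out

def suppress_high_freq(x, keep):
    """Keep only the `keep` lowest-frequency DCT coefficients."""
    c = dct(x)
    c = c[:keep] + [0.0] * (len(c) - keep)
    return idct(c)

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 1.0, 1.1, 0.9]  # the spike acts like noise
smoothed = suppress_high_freq(noisy, keep=3)
```

    Dropping the high-frequency coefficients flattens the spike, which is the intuition behind suppressing distortion energy in the DCT domain.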

    Impact of COVID-19 Lockdowns on air quality in Bangladesh : Analysis and AQI forecasting with support vector regression

    Over the past few decades, air pollution has emerged as a significant environmental hazard, causing premature deaths in Southeast Asia. The proliferation of industrialization and deforestation has resulted in an alarming increase in pollution levels. However, the COVID-19 pandemic significantly reduced the amount of volatile organic compounds and toxic gases in the air due to the decrease in human activity caused by lockdowns and restrictions. This study investigates the air quality in various geographical areas of Bangladesh, comparing the air quality index (AQI) during different lockdown periods to equivalent time spans over the preceding eight years in 10 of the country's busiest cities. The study demonstrates a strong correlation between the rapid and widespread dispersion of COVID-19 and the reduction of air pollution in Bangladesh. In addition, we evaluated the performance of Support Vector Regression (SVR) for AQI forecasting on the time series dataset. The results can help improve machine learning and deep learning models for accurate AQI forecasting. This study contributes to developing effective policies and strategies for reducing air pollution in Bangladesh and in other countries facing similar challenges.
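    SVR fits a regressor under an epsilon-insensitive loss: errors within a tolerance band cost nothing, and larger errors are penalized only for their excess. A minimal sketch of that loss term follows; the AQI values and the epsilon of 10 are hypothetical, not the study's data or settings.

```python
# Epsilon-insensitive loss, the error term that Support Vector Regression penalizes.

def epsilon_insensitive_loss(y_true, y_pred, epsilon=10.0):
    """Mean epsilon-insensitive loss: max(0, |error| - epsilon), averaged."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        total += max(0.0, abs(t - p) - epsilon)
    return total / len(y_true)

# Hypothetical daily AQI readings vs. forecasts.
actual = [155, 140, 120, 180]
forecast = [150, 150, 100, 178]
loss = epsilon_insensitive_loss(actual, forecast)
```

    Only the third forecast (off by 20) exceeds the tolerance band, so it alone contributes to the loss; this tolerance is what makes SVR robust to small forecasting errors.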

    Efficacy of fungicides against grain spot disease in rice (Oryza sativa)

    A field study was conducted to evaluate the efficacy of five selected fungicides, viz. iprodione (Rovral), tebuconazole (Folicur), and hexaconazole (Hayconazole, Orazole, and Titan), against grain spot disease in rice. The experiment was conducted at the Field Laboratory, Bangladesh Rice Research Institute (BRRI), Gazipur, and at two regional stations at Barisal and Satkhira in Bangladesh. The high-yielding cultivar BRRIdhan28 was selected as the test crop. The percentage of unfilled grain at the different experimental sites varied widely, from 4.4 to 14.1% in the control. The lowest unfilled grain was found in the Titan-treated plots at Barisal (13.6%) and Satkhira (3.8%), while at Gazipur the lowest unfilled grain (7.0%) was found in the Orazole-treated plot. Application of Rovral (12.4%) and Folicur (15.8%) reduced the percentage of spotted grain compared with the control (23.6%) at the Satkhira site. The average 100-grain weight varied considerably among the sites and was comparatively higher at Satkhira. Seed health tests revealed low rates of seed infection with Bipolaris oryzae, Curvularia lunata, Alternaria tenuis and Trichoconis padwickii irrespective of site. Seed infection with B. oryzae at Barisal, Gazipur and Satkhira ranged from 2.88-4.34%, 3.50-5.67% and 1.10-2.55%, respectively, across treatments. At all sites, the highest incidence of C. lunata was observed in the control treatment, but the differences from the fungicide treatments were non-significant, indicating a minimal effect of the tested fungicides against C. lunata. The tested fungicides were found ineffective in controlling A. tenuis and T. padwickii. The results showed that the incidence of all the above-mentioned pathogens in seed was comparatively higher at the Gazipur site.

    BackNet: An Enhanced backbone network for accurate detection of objects with large scale variations

    Deep Convolutional Neural Networks (CNNs) have induced significant progress in the field of computer vision, including object detection and classification. Two-stage detectors like Faster RCNN and its variants are found to be more accurate than their one-stage counterparts. Faster RCNN combines an ImageNet-pretrained backbone network (e.g. VGG16) and a Region Proposal Network (RPN) for object detection. Although Faster RCNN performs well on medium- and large-scale objects, detecting smaller ones with high accuracy while maintaining stable performance on larger objects remains a challenging task. In this work, we focus on designing a robust backbone network for Faster RCNN that is capable of detecting objects with large variations in scale. Considering the difficulties posed by small objects, our aim is to design a backbone network that allows signals extracted from small objects to propagate right through to the deepest layers of the network. With this motivation, we propose a robust network, BackNet, which can be integrated as a backbone into any two-stage detector. We evaluate the performance of BackNet-Faster RCNN on the MS COCO dataset and show that the proposed method outperforms five contemporary methods.
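    A back-of-the-envelope sketch of the motivation above: each stride-2 stage halves the spatial resolution of the feature map, so a small object's footprint can shrink below one cell before the deepest layers, while a large object survives. The stage count and object sizes are illustrative, not VGG16's exact configuration.

```python
# Approximate spatial footprint of an object after repeated 2x down-sampling.

def footprint_after_stages(object_pixels, num_stride2_stages):
    """Return the object's footprint (in feature-map cells) after down-sampling."""
    size = float(object_pixels)
    for _ in range(num_stride2_stages):
        size /= 2  # each stride-2 stage halves spatial resolution
    return size

small_object = footprint_after_stages(16, 5)   # a 16 px object after 5 stages
large_object = footprint_after_stages(256, 5)  # a 256 px object after 5 stages
```

    The small object ends up occupying less than one cell, so its signature is effectively gone from the deepest feature maps, which is exactly the failure mode a scale-robust backbone must address.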

    Dynamics of Digital Pen-Tablet: Handwriting Analysis for Person Identification Using Machine and Deep Learning Techniques

    Handwriting is controlled by neurons in the brain’s nervous system, reflecting an individual’s personality and psychology. This unique characteristic can be used for various applications, including user authentication, assessment of neurodegenerative disorders, and classification of handedness, gender, and age groups. Traditional authentication systems rely on memorized secrets or fingerprints and are prone to information leakage, making them vulnerable to security breaches. Most prior research has been limited by image quality, camera framing, and lighting effects on text- and image-dependent performance. This paper therefore focuses on real-time, text-independent fine-motor handwriting data and proposes an efficient, low-cost authentication system built on efficient feature extraction and optimal feature selection. This research utilizes two benchmark databases, including the handwriting data of 48 (24+24) participants collected via a sensor-based pen tablet. Each participant wrote 10 words five times, for a total of 2400 samples. Classifying the handwriting of the different individuals proceeds in 3 phases: feature extraction, feature selection, and classification. A total of 91 features (statistical, kinematic, spatial, and composite) were extracted from accurate, real-time numerical handwriting data. Efficient and optimal features were selected using four feature selection approaches, namely Pearson’s r correlation, ANOVA-F, Mutual Information Gain, and PCA, among which the ANOVA-F test and PCA performed well on the extracted handwriting data. Then, 14 machine learning (ML) models and 7 deep learning (DL) models were applied to the individual classification problem, with both no-feature-selection and full-feature-selection scenarios considered.
    The experimental analysis was conducted from several angles and perspectives, such as K-fold cross-validation, testing system efficiency with 5/10/15/24/48 individuals, and per-task evaluation. It shows that the ML-based algorithm CatBoost (99.07%) with ANOVA-F features and the DL-based model BiLSTM (98.31%) with PCA-selected features provide the highest accuracies on dataset 2, which supports the practicality and reliability of this system for user identification.
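    The ANOVA-F selection step can be sketched for a single feature: its F-score compares between-writer variance to within-writer variance, and features with high scores separate writers well. This is a pure-Python toy with made-up pen-pressure values; in practice scikit-learn's `f_classif` computes the same statistic.

```python
# One-way ANOVA F-statistic for a single feature across writer classes.

def anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy feature (e.g. mean pen pressure) for two writers: well separated
# between classes and tight within classes, so F should be large.
writer_a = [0.50, 0.52, 0.48]
writer_b = [0.80, 0.78, 0.82]
f_score = anova_f([writer_a, writer_b])
```

    A feature whose per-writer values overlap heavily would score near 1 and be dropped, which is how the ANOVA-F filter trims the 91 extracted features down to a discriminative subset.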

    Atomic Orbital Search Algorithm for Efficient Maximum Power Point Tracking in Partially Shaded Solar PV Systems

    The efficient extraction of solar PV power is crucial to maximize utilization, even in rapidly changing environmental conditions. The increasing energy demands highlight the importance of solar photovoltaic (PV) systems for cost-effective energy production. However, traditional PV systems with bypass diodes at their output terminals often produce multiple power peaks, leading to significant power losses if the optimal combination of voltage and current is not achieved. To address this issue, algorithms capable of finding the highest value of a function are employed. Since the PV power output is a complex function with multiple local maximum power points (LMPPs), conventional algorithms struggle to handle partial shading conditions (PSC). As a result, nature-inspired algorithms, also known as metaheuristic algorithms, are used to maximize the power output of solar PV arrays. In this study, we introduced a novel metaheuristic algorithm called atomic orbital search for maximum power point tracking (MPPT) under PSC. The primary motivation behind this research is to enhance the efficiency and effectiveness of MPPT techniques in challenging scenarios. The proposed algorithm offers several advantages, including higher efficiency, shorter tracking time, reduced output variations, and improved duty ratios, resulting in faster convergence to the maximum power point (MPP). To evaluate the algorithm’s performance, we conducted extensive experiments using Typhoon HIL and compared it with other existing algorithms commonly employed for MPPT. The results clearly demonstrated that the proposed atomic orbital search algorithm outperformed the alternatives in terms of rapid convergence and efficient MPP tracking, particularly for complex shading patterns. This makes it a suitable choice for developing an MPP tracker applicable in various settings, such as industrial, commercial, and residential applications. 
    In conclusion, our research addresses the pressing need for effective MPPT methods in solar PV systems operating under challenging conditions. The atomic orbital search algorithm showcases its potential to significantly improve the efficiency and performance of MPPT, ultimately contributing to the optimization of solar energy extraction and utilization.
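    The multi-peak problem under partial shading can be sketched with a synthetic P-V curve: a purely local hill-climbing tracker (in the spirit of perturb-and-observe) stalls on a local maximum power point, while a coarse global scan, standing in here for a metaheuristic such as atomic orbital search, reaches the true MPP. The curve, voltages, and step sizes below are invented for illustration, not measured PV data.

```python
import math

def pv_power(v):
    """Synthetic P-V curve: local peak near v=2 V, global peak near v=7 V."""
    return 30 * math.exp(-(v - 2) ** 2) + 80 * math.exp(-(v - 7) ** 2)

def hill_climb(start, step=0.1):
    """Perturb-and-observe style local search: climbs to the nearest peak."""
    v = start
    while pv_power(v + step) > pv_power(v):
        v += step
    while pv_power(v - step) > pv_power(v):
        v -= step
    return v

def global_scan(v_min=0.0, v_max=10.0, samples=101):
    """Coarse global sweep, then local refinement from the best sample."""
    best = max((v_min + i * (v_max - v_min) / (samples - 1) for i in range(samples)),
               key=pv_power)
    return hill_climb(best)

v_local = hill_climb(0.0)   # stalls on the shaded local peak near 2 V
v_global = global_scan()    # reaches the true MPP near 7 V
```

    The gap between the two operating points is exactly the power loss that global, metaheuristic MPPT methods are designed to recover under partial shading.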

    Enhancing the effectiveness of local descriptor based image matching

    Image registration has received great attention from researchers over the last few decades. SIFT (Scale Invariant Feature Transform), a local descriptor-based technique, is widely used for registering and matching images. To establish correspondences between images, SIFT uses a Euclidean distance-ratio metric. However, this approach produces many incorrect matches, and eliminating these inaccurate matches has been a challenge. Various methods have been proposed to mitigate this problem. In this paper, we propose a scale- and orientation-harmony-based pruning method that improves the image matching process by successfully eliminating incorrect SIFT descriptor matches. Moreover, our technique can predict the image transformation parameters based on a novel adaptive clustering method with much higher matching accuracy. Our experimental results show that the proposed method achieves, on average, approximately 16% and 10% higher matching accuracy than traditional SIFT and a contemporary method, respectively.
    2018 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2018)
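    The Euclidean distance-ratio metric mentioned above (Lowe's ratio test) can be sketched as follows. The descriptors are toy 2-D vectors rather than real 128-D SIFT descriptors, and the 0.8 threshold is a common choice, not necessarily the paper's setting.

```python
import math

# Lowe's ratio test: accept a match only when the nearest candidate is
# clearly closer than the second nearest.

def euclidean(a, b):
    """Euclidean distance between two equal-length descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_match(desc, candidates, ratio=0.8):
    """Return True if the best match passes the distance-ratio test."""
    dists = sorted(euclidean(desc, c) for c in candidates)
    return len(dists) >= 2 and dists[0] < ratio * dists[1]

# A distinctive descriptor passes; an ambiguous one is pruned.
candidates = [(0.0, 0.1), (3.0, 3.0), (3.1, 3.0)]
distinctive = ratio_test_match((0.0, 0.0), candidates)  # one clear nearest match
ambiguous = ratio_test_match((3.05, 3.0), candidates)   # two near-equal matches
```

    The ambiguous case, where two candidates are almost equally close, is precisely the failure mode that ratio filtering and the proposed scale/orientation-harmony pruning aim to eliminate.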